knitr::opts_chunk$set(echo = TRUE, warning = FALSE, message = FALSE)
library(tidyverse)
library(here)
library(broom)

# Time series packages
library(tsibble)
library(feasts)
library(fable)

Part 0: Lab set-up

Part 1: Time series wrangling & forecasting

To reinforce skills for wrangling, visualizing, and forecasting with time series data, we will use data on US residential energy consumption from January 1973 - October 2017 (from the US Energy Information Administration).

A. Create a new .Rmd

  • Create a new R Markdown document
  • Remove everything below the first code chunk
  • Attach packages: tidyverse, tsibble, feasts, fable, broom
  • Save the .Rmd in a subfolder you create for your code (you pick the file name)

B. Read in energy data and convert to a tsibble

Read in the energy.csv data (use here(), since it’s in the data subfolder).

energy <- read_csv(here("data", "energy.csv"))
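
Explore the energy object as it currently exists; a couple of quick ways (base R, nothing fancy):

head(energy)     ## first few rows: note the month column is a character
summary(energy)  ## column classes and ranges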

Notice that there is a column month that contains the month name and 4-digit year. Currently, however, R understands it as a character (instead of as a date). Our next step is to convert it into a time series data frame (a tsibble), in two steps:

  1. Add a new column (date) that is the current month column converted to a time series class, yearmonth
  2. Convert the data frame to a tsibble, with that date column as the time index

Here’s what that looks like in a piped sequence:

energy_ts <- energy %>% 
  mutate(date = tsibble::yearmonth(month)) %>% 
  as_tsibble(key = NULL, index = date) ## tells it to treat the data as a time series tibble

Now that it’s stored as a tsibble, we can start visualizing, exploring, and working with it more easily.

C. Exploratory time series visualization

Raw data graph

Exploratory data visualization is critical no matter what type of data we’re working with, including time series data.

Let’s take a quick look at our tsibble (for residential energy use, in trillion BTU):

ggplot(data = energy_ts, aes(x = date, y = res_total)) +
  geom_line() +
  labs(y = "Residential energy consumption \n (Trillion BTU)")

Looks like there are some interesting things happening. We should ask:

  • Is there an overall trend? There is an overall increasing trend, with stability (and possibly a slight decreasing trend) starting around 2005.
  • Is there seasonality? Clear seasonality, with a dominant seasonal feature and a secondary peak each year; that secondary peak has increased substantially.
  • Any cyclicality evident? Any other notable patterns, outliers, etc.? No notable cyclicality or outliers.

Seasonplot:

A seasonplot can help point out seasonal patterns, and how those patterns shift over the years.

We’ll use feasts::gg_season() to create an exploratory seasonplot, with month on the x-axis, energy consumption on the y-axis, and each year as its own series (mapped by line color).

## because we set date as the index above, gg_season() will use it automatically; no need to specify
energy_ts %>% 
  gg_season(y = res_total) + ## feasts::gg_season(), with each year mapped to line color
  theme_minimal() +
  scale_color_viridis_c() +
  labs(x = "month",
       y = "residential energy consumption (trillion BTU)")

This is really useful for us to explore both seasonal patterns, and how those seasonal patterns have changed over the years of this data (1973 - 2017).

Takeaways:

  • The highest residential energy usage is around December / January / February.
  • There is a secondary peak around July & August (that’s the repeated secondary peak we see in the original time series graph).
  • The prevalence of that second peak has been increasing over the course of the time series: in 1973 there was hardly any summer peak, while in more recent years that peak is much more prominent.

Let’s explore the data a couple more ways:

Subseries plot:

This function takes the data and explores it across the indexing variable of the tsibble, in this case the date variable, which consists of months across these years.

energy_ts %>% gg_subseries(res_total)

Our takeaway here is similar:

  • There is clear seasonality (higher values in winter months), with an increasingly evident second peak in June/July/August.
  • This reinforces our takeaways from the raw data and seasonplots.

Decomposition (here by STL)

See Rob Hyndman’s section on STL decomposition to learn how it compares to the classical decomposition we did last week: “STL is a versatile and robust method for decomposing time series. STL is an acronym for ‘Seasonal and Trend decomposition using Loess’, while Loess is a method for estimating nonlinear relationships.”

Notice that it allows seasonality to vary over time (a major difference from classical decomposition, and important here since we do see changes in seasonality); a sketch of how to control that behavior appears below the decomposition plot.

# Find STL decomposition
dcmp <- energy_ts %>%
  model(STL(res_total ~ season())) ## fit an STL decomposition, modeling res_total as a function of season()

# View the components
#components(dcmp)

# Visualize the decomposed components
components(dcmp) %>% 
  autoplot() +
  theme_minimal()
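
Related to the note above about time-varying seasonality: STL’s season() term accepts a window argument that controls how quickly the seasonal pattern is allowed to change. As a sketch (not required for the lab), window = "periodic" forces the seasonal pattern to be identical across years, which is the closest STL gets to a classical decomposition:

# Fixed (periodic) seasonality, for comparison with the default above
dcmp_fixed <- energy_ts %>%
  model(STL(res_total ~ season(window = "periodic")))

# components(dcmp_fixed) %>% autoplot()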

NOTE: the grey bars on the left of each panel indicate relative scale: every bar represents the same magnitude, so a shorter bar means that component spans a wider range. Comparing the bars shows how much variation the total, trend, and seasonal components capture relative to the remainder.
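
If you want a numeric check on those relative scales, one option is to compare the span of each component. A minimal sketch (this assumes the seasonal column is named season_year, the default for monthly data; confirm with names(components(dcmp))):

# Span (max - min) of each component; a wider span = more variation
components(dcmp) %>%
  as_tibble() %>% # drop the tsibble structure for a plain summary
  summarize(across(c(trend, season_year, remainder),
                   ~ diff(range(.x, na.rm = TRUE))))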

Autocorrelation function (ACF)

We use the ACF to explore autocorrelation (here, we would expect seasonality to be clear from the ACF):

energy_ts %>% 
  ACF(res_total) %>% 
  autoplot()

## we see that observations separated by 12 months are the most highly correlated, reflecting the strong seasonality seen in all of our other exploratory visualizations. 

D. Forecasting by Holt-Winters exponential smoothing

Note: here we use ETS, which technically uses different optimization than Holt-Winters exponential smoothing, but is otherwise the same (From Rob Hyndman: “The model is equivalent to the one you are fitting with HoltWinters(), although the parameter estimation in ETS() uses MLE.”)
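
If you’re curious about that comparison, here’s a minimal sketch of the base-R HoltWinters() fit the quote refers to (not needed for this lab; it assumes the series is monthly starting January 1973):

# Base-R Holt-Winters with multiplicative seasonality, for comparison only
energy_hw <- HoltWinters(
  ts(energy$res_total, start = c(1973, 1), frequency = 12),
  seasonal = "multiplicative"
)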

To create the model below, we specify the model type (exponential smoothing, ETS), then tell it what type of seasonality it should assume using the season("") expression, where:

  • "N" = non-seasonal (try changing it to this to see how unimpressive the forecast becomes; a sketch appears after the next code chunk!)
  • "A" = additive
  • "M" = multiplicative

Here, we’ll say seasonality is multiplicative due to the change in variance over time and also within the secondary summer peak:

# Create the ETS model:
energy_fit <- energy_ts %>%
  model(
    ets = ETS(res_total ~ season("M")) ## use the ETS function with multiplicative seasonality
  )
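
For reference, here’s the non-seasonal version mentioned in the list above; same syntax, just season("N") (forecast it the same way as below to see how much the forecast degrades):

# Non-seasonal ETS, for comparison with the seasonal model above
energy_fit_ns <- energy_ts %>%
  model(ets_ns = ETS(res_total ~ season("N")))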

# Forecast using the model 10 years into the future:
energy_forecast <- energy_fit %>% 
  forecast(h = "10 years")

# Plot just the forecasted values (with 80% & 95% prediction intervals):
energy_forecast %>% 
  autoplot()

# Or plot it added to the original data:
energy_forecast %>% 
  autoplot(energy_ts)

Assessing residuals

We can use broom::augment() to append our original tsibble with the energy usage the model predicts. Let’s do a little exploring through visualization.

First, use broom::augment() to get the predicted values & residuals:

# Append the predicted values (and residuals) to original energy data
energy_predicted <- broom::augment(energy_fit)

# Use View(energy_predicted) to see the resulting data frame

Now, plot the actual energy values (res_total), and the predicted values (stored as .fitted) atop them:

ggplot(data = energy_predicted) +
  geom_line(aes(x = date, y = res_total)) +
  geom_line(aes(x = date, y = .fitted), color = "red", alpha = .7)

## Cool, those look like pretty good predictions! 

Now let’s explore the residuals.

Remember, some important considerations: Residuals should be uncorrelated, centered at 0, and ideally normally distributed.

One way we can check the distribution is with a histogram:

ggplot(data = energy_predicted, aes(x = .resid)) +
  geom_histogram()


We see that this looks relatively normally distributed and centered at 0 (we could compute summary statistics beyond this to explore further).
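
The histogram addresses the distribution; to check the “uncorrelated” condition, we can also look at the ACF of the residuals and (optionally) run a Ljung-Box test. A sketch, assuming the residual column is .resid as above (ljung_box comes from feasts):

# ACF of the residuals: ideally no significant spikes remain
energy_predicted %>%
  ACF(.resid) %>%
  autoplot()

# Ljung-Box portmanteau test: a small p-value suggests leftover autocorrelation
energy_predicted %>%
  features(.resid, ljung_box, lag = 24)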

This is the END of what you are expected to complete for Part 1 on time series exploration and forecasting. Section E, below, shows how to use other forecasting models (seasonal naive, and autoregressive integrated moving average; the latter was not covered in ESM 244 this year).

E. Other forecasting methods (OPTIONAL SECTION - NOT REQUIRED)

There are a number of other forecasting methods and models!

You can learn more about ETS forecasting, seasonal naive (SNAIVE) and autoregressive integrated moving average (ARIMA) from Hyndman’s book - those are the models that I show below.

# Fit 3 different forecasting models (ETS, ARIMA, SNAIVE):
energy_fit_multi <- energy_ts %>%
  model(
    ets = ETS(res_total ~ season("M")),
    arima = ARIMA(res_total),
    snaive = SNAIVE(res_total)
  )

# Forecast 3 years into the future (from data end date)
multi_forecast <- energy_fit_multi %>% 
  forecast(h = "3 years")

# Plot the 3 forecasts
multi_forecast %>% 
  autoplot(energy_ts)

# Or just view the forecasts (note the similarity across models):
multi_forecast %>% 
  autoplot()

We can see that all three of these models (exponential smoothing, seasonal naive, and ARIMA) yield similar forecasting results.
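
To put numbers on that comparison, fabletools provides accuracy(): called on the fitted model table, it returns training-set accuracy measures (RMSE, MAE, etc.) for each model. Keep in mind these are in-sample measures; comparing forecasts against a held-out test period would be a stronger check.

# Training-set accuracy for each of the three models
energy_fit_multi %>%
  accuracy()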

End Part 1